
    Evaluating methodological quality of Prognostic models Including Patient-reported HeAlth outcomes in oncologY (EPIPHANY): A systematic review protocol

    Introduction While there is mounting evidence of the independent prognostic value of patient-reported outcomes (PROs) for overall survival (OS) in patients with cancer, conducting these studies poses a number of methodological challenges. The aim of this systematic review is to evaluate the quality of published studies in this research area, in order to identify methodological and statistical issues deserving special attention and, where possible, to provide evidence-based recommendations. Methods and analysis An electronic search will be performed in PubMed to identify studies developing or validating a prognostic model that includes PROs as predictors. Two reviewers will independently collect data using a predefined and standardised data extraction form covering study characteristics, the PRO measures used and the multivariable prognostic models. Study selection will be reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, and the data extraction form will use fields from the Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies (CHARMS) checklist for multivariable models. Methodological quality assessment will also be performed, based on prespecified domains of the CHARMS checklist. As substantial heterogeneity among the included studies is expected, a narrative evidence synthesis will also be provided. Ethics and dissemination Given that this systematic review will use only published data, ethical permission will not be required. Findings from this review will be published in peer-reviewed scientific journals and presented at major international conferences. We anticipate that this review will help identify key areas for improvement in conducting and reporting prognostic factor analyses with PROs in oncology and will lay the groundwork for future evidence-based recommendations in this area of research. PROSPERO registration number CRD42018099160.

    Incorporating Participants' Welfare into Sequential Multiple Assignment Randomized Trials

    Dynamic treatment regimes (DTRs) are sequences of decision rules that recommend treatments based on patients' time-varying clinical conditions. The sequential multiple assignment randomized trial (SMART) is an experimental design that can provide high-quality evidence for constructing optimal DTRs. In a conventional SMART, participants are randomized to the available treatments at multiple stages with balanced randomization probabilities. Despite its relative simplicity of implementation and desirable performance in comparing embedded DTRs, the conventional SMART faces unavoidable ethical issues, including assigning many participants to an empirically inferior treatment or to a treatment they dislike, which can slow recruitment and raise attrition rates, ultimately degrading the internal and external validity of the trial results. In this context, we propose a SMART under the Experiment-as-Market framework (SMART-EXAM), a novel SMART design that holds the potential to improve participants' welfare by incorporating their preferences and predicted treatment effects into the randomization procedure. We describe the steps of conducting a SMART-EXAM and evaluate its performance compared to the conventional SMART. The results indicate that the SMART-EXAM can improve the welfare of the participants enrolled in the trial, while also achieving a desirable ability to construct an optimal DTR when the experimental parameters are suitably specified. We finally illustrate the practical potential of the SMART-EXAM design using data from a SMART for children with attention-deficit/hyperactivity disorder (ADHD).
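    To make the stage-wise randomization of a conventional SMART concrete, the following is a minimal sketch in Python; the two-stage layout, the treatment labels and the 0.4 response probability are illustrative assumptions of ours, not the design of any specific trial.

```python
# Minimal sketch of balanced, stage-wise randomization in a
# conventional two-stage SMART (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)

def run_conventional_smart(n_participants=200):
    records = []
    for _ in range(n_participants):
        # Stage 1: balanced (50/50) randomization between two treatments.
        a1 = rng.choice(["A", "B"])
        # Hypothetical intermediate response model, for illustration only.
        responded = rng.random() < 0.4
        if responded:
            a2 = "continue"  # responders stay on the initial treatment
        else:
            # Stage 2: non-responders are re-randomized, again with
            # balanced probabilities, between intensifying and switching.
            a2 = rng.choice(["intensify", "switch"])
        records.append((a1, responded, a2))
    return records

trial = run_conventional_smart()
```

    A design such as SMART-EXAM would replace the balanced 50/50 probabilities above with probabilities informed by participants' preferences and predicted treatment effects.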

    Algorithms for Adaptive Experiments that Trade-off Statistical Analysis with Reward: Combining Uniform Random Assignment and Reward Maximization

    Multi-armed bandit algorithms like Thompson Sampling (TS) can be used to conduct adaptive experiments, in which accumulating data are used to progressively assign participants to more effective arms, thereby maximizing reward. Such assignment strategies increase the risk that statistical hypothesis tests identify a difference between arms when there is none, or fail to detect a difference when one truly exists. We tackle this by introducing a novel heuristic algorithm, called TS-PostDiff (Posterior Probability of Difference). TS-PostDiff takes a Bayesian approach to mixing TS and Uniform Random (UR) allocation: the probability that a participant is assigned using UR allocation is the posterior probability that the difference between two arms is 'small' (below a certain threshold), allowing for more UR exploration when there is little or no reward to be gained. We evaluate TS-PostDiff against state-of-the-art strategies. The empirical and simulation results help characterize the trade-offs of these approaches between reward, False Positive Rate (FPR), and statistical power, as well as the circumstances under which each is effective. We quantify the advantage of TS-PostDiff in performing well across multiple differences in arm means (effect sizes), showing the benefits of adaptively changing randomization/exploration in TS in a "Statistically Considerate" manner: reducing FPR and increasing statistical power when differences are small or zero and there is less reward to be gained, while exploiting more when differences may be large. This highlights important considerations for future algorithm development and analysis to better balance reward and statistical analysis.
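    As a concrete illustration of the allocation rule just described, here is a minimal sketch for two Bernoulli arms; the Beta(1, 1) priors, the 0.1 threshold and the Monte Carlo approximation of the posterior are illustrative assumptions of ours.

```python
# Minimal sketch of TS-PostDiff allocation for two Bernoulli arms.
import numpy as np

rng = np.random.default_rng(0)

def ts_postdiff_assign(successes, failures, threshold=0.1, n_draws=2000):
    """Return the arm (0 or 1) for the next participant."""
    # Monte Carlo draws from each arm's Beta posterior (Beta(1, 1) priors).
    draws = np.column_stack([
        rng.beta(successes[k] + 1, failures[k] + 1, size=n_draws)
        for k in (0, 1)
    ])
    # Posterior probability that the difference between arms is 'small'.
    p_small = np.mean(np.abs(draws[:, 0] - draws[:, 1]) < threshold)
    if rng.random() < p_small:
        # Uniform Random allocation when the arms look similar, so little
        # reward is sacrificed and exploration is preserved.
        return int(rng.integers(2))
    # Otherwise fall back to standard Thompson Sampling.
    sample = [rng.beta(successes[k] + 1, failures[k] + 1) for k in (0, 1)]
    return int(np.argmax(sample))

arm = ts_postdiff_assign(successes=[12, 20], failures=[18, 10])
```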

    Using Adaptive Bandit Experiments to Increase and Investigate Engagement in Mental Health

    Digital mental health (DMH) interventions, such as text-message-based lessons and activities, offer immense potential for accessible mental health support. While these interventions can be effective, real-world experimental testing can further enhance their design and impact. Adaptive experimentation, utilizing algorithms like Thompson Sampling for (contextual) multi-armed bandit (MAB) problems, can lead to continuous improvement and personalization. However, it remains unclear when these algorithms can simultaneously increase user experience rewards and facilitate appropriate data collection for social-behavioral scientists to analyze with sufficient statistical confidence. Although a growing body of research addresses the practical and statistical aspects of MAB and other adaptive algorithms, further exploration is needed to assess their impact across diverse real-world contexts. This paper presents a software system, developed over two years, that allows text-messaging intervention components to be adapted using bandit and other algorithms while collecting data for side-by-side comparison with traditional uniform random non-adaptive experiments. We evaluate the system by deploying a text-message-based DMH intervention to 1,100 users, recruited through a large mental health non-profit organization, and share the path forward for deploying this system at scale. This system not only enables applications in mental health but could also serve as a model testbed for adaptive experimentation algorithms in other domains.

    Multiculturality and Interculturality: A Qualitative Analysis of the Perspective of Focus Group Participants

    This work examines the textual content of the focus group interviews conducted as part of the project “Multiculturality and education in Pontifical universities and formation communities of consecrated life”. More specifically, it focuses on the first focus group, with an in-depth analysis of the question “In your opinion, what is the difference between multiculturality and interculturality?”. The aim is to investigate, by means of qualitative content analysis methods, participants’ understanding of the two key concepts of this project, which are often misinterpreted or used interchangeably. The results show that participants have a clear idea of the concept of multiculturality, seen as a factual state of cultural plurality and diversity characterized by a definite and static nature. They also recognize that multicultural plurality provides an opportunity for individual growth, but that it must be regulated, especially at the communicative level, to allow for a mutually tolerant and respectful coexistence, without necessarily interfering with other cultures. By contrast, in an intercultural context, the key roles of union and mutual sharing emerge, with a strong emphasis on individual cultural transformation. In this regard, this contribution sheds light on a heterogeneous and often conflicting range of perspectives on the intensity of such transformation: more specifically, on the extent to which individuals should preserve or relinquish their own cultural identities as a result of the intercultural transformation process.

    Reinforcement learning in modern biostatistics: benefits, challenges and new proposals

    Applications of reinforcement learning (RL) for supporting, managing and improving decision-making are becoming increasingly popular in a variety of medicine and healthcare domains where the problem has a sequential nature. By continuously interacting with the underlying environment, RL techniques are able to learn by trial and error how to take better actions in order to maximize an outcome of interest over time. However, while RL offers a powerful new framework, it also poses some unique challenges for data analysis and interpretability, which call for new statistical techniques in both predictive and descriptive learning. Notably, several methodological challenges, for which the contribution of the biostatistical community may play a crucial role, limit the use of RL in real life. Aiming to bridge the statistics and RL communities, we start by assimilating the different existing RL terminologies, notations and approaches into a coherent body of work, and by translating them from a machine learning (ML) to a statistical perspective. Then, through a comprehensive methodological review, we report and discuss the state-of-the-art RL-based research in healthcare. Two main applied domains emerged: 1) adaptive interventions (AIs), encompassing both dynamic treatment regimes and just-in-time adaptive interventions in mobile health (mHealth); and 2) adaptive designs of clinical trials, specifically dose-finding designs and adaptive randomization. We illustrate existing RL-based methods in these areas, discussing their benefits and the open problems that may impact their application in real life. A major barrier to adopting RL in real-world experiments is the lack of clarity on how statistical analyses and inference are affected. In clinical trials, for example, RL can serve the practical (and more ethical) goal of improving patients’ benefit, since adaptively randomizing participants to the best evidence-based treatment may better maximize clinical outcomes; however, when it comes to the scientific goal of, e.g., discovering whether one treatment is more effective than a control, far less is known about its inferential properties. Through a simulation study, we investigate the challenges of conducting hypothesis testing on data collected through a class of RL algorithms, multi-armed bandits (MABs), outlining the harm MAB algorithms can cause to the type-I error and power of traditional statistical tests. This empirical evaluation provides guidance on two alternative ways of pursuing improved statistical hypothesis testing: 1) modifying the test statistic using knowledge of the adaptive nature of the data collection; 2) modifying the algorithm or framework to be more sensitive to both statistical inference and reward maximization. Focusing on Thompson Sampling (a randomized MAB strategy), we show how a modified version of it achieves an effective intermediate between these two objectives. These findings provide insights into how such challenges can be surmounted by bridging machine learning, statistics and the applied sciences to conduct adaptive experiments in the real world, aiming to simultaneously help individuals and advance scientific research.
We finally combine our methodological knowledge with a motivating mHealth study for improving physical activity, to illustrate the substantial opportunities for collaboration between statistics and RL researchers in developing adaptive interventions for the rapidly growing area of mHealth.
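    To illustrate the kind of simulation study described above, the following sketch checks the rejection rate of a standard test under a true null when data are collected adaptively; the Bernoulli arms, sample size, and Wald z-test are illustrative assumptions of ours.

```python
# Minimal sketch: type-I error of a Wald z-test under uniform random
# versus Thompson Sampling allocation, when the two arms are identical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_trial(n=200, p=(0.5, 0.5), adaptive=True):
    s, f = np.zeros(2), np.zeros(2)  # per-arm successes / failures
    for _ in range(n):
        if adaptive:  # Thompson Sampling with Beta(1, 1) priors
            arm = int(np.argmax(rng.beta(s + 1, f + 1)))
        else:         # uniform random allocation
            arm = int(rng.integers(2))
        reward = float(rng.random() < p[arm])
        s[arm] += reward
        f[arm] += 1 - reward
    nk = np.maximum(s + f, 1.0)
    phat = s / nk
    pooled = s.sum() / n
    se = np.sqrt(pooled * (1 - pooled) * (1 / nk[0] + 1 / nk[1]))
    z = (phat[0] - phat[1]) / se
    return 2 * (1 - stats.norm.cdf(abs(z)))  # two-sided p-value

# Under the null, the rejection rate should sit near the nominal 0.05
# for uniform allocation; adaptive allocation typically inflates it.
for adaptive in (False, True):
    pvals = [simulate_trial(adaptive=adaptive) for _ in range(500)]
    print("adaptive" if adaptive else "uniform ", np.mean(np.array(pvals) < 0.05))
```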

    Multinomial Thompson Sampling for adaptive experiments with rating scales

    Bandit algorithms such as Thompson Sampling (TS) have for decades been put forth as useful tools for conducting adaptively randomized experiments. By skewing the allocation ratio towards superior arms, they can substantially improve participants’ welfare with respect to particular outcomes of interest. For example, as we illustrate in this work, they may use participants’ ratings to identify promising text messages for managing mental health issues and assign those messages more often. However, model-based algorithms such as TS typically assume binary or normal outcome models, which may lead to suboptimal performance on categorical rating-scale outcomes. Guided by our field experiment, we extend the application of TS to rating-scale data and show its improved performance in a number of synthetic experiments.
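    One natural way to model rating-scale outcomes directly, in the spirit of the extension described above, is a Dirichlet-multinomial version of TS; the 5-point scale, Dirichlet(1, ..., 1) priors and mean-rating scoring below are illustrative assumptions of ours, not necessarily the exact model used in the paper.

```python
# Minimal sketch of Thompson Sampling with a Dirichlet-multinomial
# model for a 5-point rating-scale outcome.
import numpy as np

rng = np.random.default_rng(0)
K_ARMS, N_LEVELS = 3, 5
ratings = np.arange(1, N_LEVELS + 1)   # rating-scale values 1..5
counts = np.ones((K_ARMS, N_LEVELS))   # Dirichlet(1, ..., 1) priors

def choose_arm():
    # One posterior draw of category probabilities per arm, scored
    # by the mean rating it implies; pick the best-scoring arm.
    sampled_means = [rng.dirichlet(counts[k]) @ ratings for k in range(K_ARMS)]
    return int(np.argmax(sampled_means))

def update(arm, rating):
    # Conjugate update: increment the observed category's count.
    counts[arm, rating - 1] += 1

arm = choose_arm()
update(arm, rating=4)
```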

    Reinforcement learning for sequential decision making in population research

    Reinforcement learning (RL) algorithms have long been recognized as powerful tools for optimal sequential decision making. The framework is concerned with a decision maker, the agent, that learns how to behave in an unknown environment by making decisions and observing their associated outcomes. The goal of the RL agent is to infer, through repeated experience, an optimal decision-making policy, i.e., a sequence of action rules that would lead to the highest, typically long-term, expected utility. Today, a wide range of domains, from economics to education and healthcare, has embraced the use of RL to address specific problems. To illustrate, we used an RL-based algorithm to design a text-messaging system that delivers personalized real-time behavioural recommendations to promote physical activity and manage depression. Motivated by the recent call of the UNECE for government-wide actions to adapt to population ageing, in this work we argue that the RL framework may provide a set of compelling strategies for supporting population research and informing population policies. After introducing the RL framework, we discuss its potential in three population-study applications: international migration, public health, and fertility.
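    For readers less familiar with the framework, a minimal sketch of tabular Q-learning, one of the classic RL algorithms, follows; the toy environment, learning rate and discount factor are illustrative assumptions of ours and unrelated to the applications discussed above.

```python
# Minimal sketch of tabular Q-learning with epsilon-greedy exploration.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Hypothetical environment: reward only for action 1 in the
    final state; transitions are uniformly random."""
    reward = 1.0 if (state == n_states - 1 and action == 1) else 0.0
    return int(rng.integers(n_states)), reward

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection balances exploration/exploitation.
    if rng.random() < eps:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Temporal-difference update towards the bootstrapped target.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state
```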

    Adaptive Experiments for Enhancing Digital Education: Benefits and Statistical Challenges

    Adaptive digital field experiments are continually increasing in their breadth of use in fields like mobile health and digital education. Using adaptive experimentation in education can help not only to explore and eventually compare various arms but also to direct more students to more helpful options. For example, an experiment might explore whether one explanation type (e.g., critiquing an existing explanation or revising one’s own explanation) leads students to a better understanding of a scientific concept, and assign that type more often. In such an experiment, data is rapidly and automatically analyzed to increase the proportion of future participants in the study allocated to better arms. One way of implementing adaptivity is through algorithms designed to solve multi-armed bandit (MAB) problems, such as Thompson Sampling (TS). The MAB problem is to effectively choose among K available options, or arms, in order to maximize the expected outcome of interest (or reward). In this work, we present real-world case studies of applying TS in education. Specifically, we explore its use for motivating students, through different email reminders, to finalize their online homework. To evaluate the potential of MAB in education, we leverage the power of simulations to further explore the behavior of TS both when there is no difference between arms and when some difference exists. We empirically show that, while adaptive experiments can result in an increased benefit for students, by assigning more people to better arms, they can also cause problems for statistical analysis. Notably, this assignment strategy calls for caution in drawing statistical conclusions, as it results in an inflated Type I error and decreased power (failure to conclude there is a difference between arms when there truly is one). We explain why this happens and propose some strategies to mitigate these issues, in the hope of providing building blocks for future research to better balance the competing goals of reward maximization and statistical inference.
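    The reward side of the trade-off described above can be checked with a short simulation; the two Bernoulli arms with a 0.1 difference in means and the Beta(1, 1) priors below are illustrative assumptions of ours, not the settings of the reported case studies.

```python
# Minimal sketch: average reward collected by Thompson Sampling versus
# uniform random assignment when one arm is truly better.
import numpy as np

rng = np.random.default_rng(0)

def mean_reward(n=500, p=(0.5, 0.6), adaptive=True):
    s, f = np.zeros(2), np.zeros(2)
    total = 0.0
    for _ in range(n):
        if adaptive:  # Thompson Sampling with Beta(1, 1) priors
            arm = int(np.argmax(rng.beta(s + 1, f + 1)))
        else:         # uniform random assignment
            arm = int(rng.integers(2))
        r = float(rng.random() < p[arm])
        s[arm] += r
        f[arm] += 1 - r
        total += r
    return total / n

# Adaptive assignment typically earns a higher mean reward here, at the
# cost of the inferential issues discussed in the abstract above.
print("uniform :", np.mean([mean_reward(adaptive=False) for _ in range(200)]))
print("adaptive:", np.mean([mean_reward(adaptive=True) for _ in range(200)]))
```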